100 research outputs found

    Information-theoretic analysis of content based identification for correlated data

    A number of multimedia fingerprinting algorithms and identification techniques have been proposed and analyzed in recent years. This paper presents a content identification setup for a class of multimedia data that can be modeled by a Gauss-Markov process. We advocate a constrained order-statistics decoding scheme based on digital fingerprints extracted from correlated data to identify contents. Finally, we investigate the fundamental limits of the proposed setup by deriving bounds on the miss and false-acceptance probabilities.
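The identification pipeline described above can be illustrated with a toy sketch. It assumes sign-of-random-projection binary fingerprints and plain minimum-Hamming-distance decoding (a simplification of the paper's constrained order-statistics decoder); all sizes, seeds, and names are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_markov(n, rho, rng):
    """Generate a first-order Gauss-Markov (AR(1)) sequence: x_i = rho*x_{i-1} + z_i."""
    x = np.zeros(n)
    z = rng.standard_normal(n)
    x[0] = z[0]
    for i in range(1, n):
        x[i] = rho * x[i - 1] + z[i]
    return x

def fingerprint(x, W):
    """Binary fingerprint: signs of random projections of the content."""
    return (W @ x > 0).astype(np.uint8)

# Hypothetical database: 100 correlated contents, 64-bit fingerprints.
n, L, rho = 256, 64, 0.9
W = rng.standard_normal((L, n))
contents = [gauss_markov(n, rho, rng) for _ in range(100)]
db = np.array([fingerprint(x, W) for x in contents])

# Query: a noise-degraded version of content 42; identify by minimum
# Hamming distance between the probe fingerprint and the database.
probe = contents[42] + 0.3 * rng.standard_normal(n)
fp = fingerprint(probe, W)
dists = (db != fp).sum(axis=1)
best = int(np.argmin(dists))
```

The miss and false-acceptance trade-off studied in the paper corresponds to thresholding `dists` rather than always returning the nearest entry.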

    Performance Analysis of Content-Based Identification Using Constrained List-Based Decoding


    Decentralized collaborative multi-institutional PET attenuation and scatter correction using federated deep learning

    Purpose: Attenuation correction and scatter compensation (AC/SC) are two main steps toward quantitative PET imaging, which remain challenging in PET-only and PET/MRI systems. These can be effectively tackled via deep learning (DL) methods. However, trustworthy and generalizable DL models commonly require well-curated, heterogeneous, and large datasets from multiple clinical centers. At the same time, owing to legal/ethical issues and privacy concerns, forming a large collective, centralized dataset poses significant challenges. In this work, we aimed to develop a DL-based model in a multicenter setting, without direct sharing of data, using federated learning (FL) for AC/SC of PET images.
Methods: Non-attenuation/scatter-corrected and CT-based attenuation/scatter-corrected (CT-ASC) 18F-FDG PET images of 300 patients were enrolled in this study. The dataset comprised 6 different centers, each with 50 patients, with scanner, image acquisition, and reconstruction protocols varying across the centers. CT-based ASC PET images served as the standard reference. All images were reviewed to include only high-quality, artifact-free PET images. Both corrected and uncorrected PET images were converted to standardized uptake values (SUVs). We used a modified nested U-Net utilizing residual U-blocks in a U-shaped architecture. We evaluated two FL models, namely sequential (FL-SQ) and parallel (FL-PL), and compared their performance with a baseline centralized (CZ) learning model, in which the data were pooled on one server, as well as with center-based (CB) models, in which a model was built and evaluated separately for each center. Data from each center were divided into training (30 patients), validation (10 patients), and test (10 patients) sets. Final evaluations and reports were performed on 60 patients (10 patients from each center).
Results: In terms of percent SUV absolute relative error (ARE%), both the FL-SQ (CI: 12.21–14.81%) and FL-PL (CI: 11.82–13.84%) models demonstrated excellent agreement with the centralized framework (CI: 10.32–12.00%), while the FL-based algorithms improved model performance by over 11% compared to the CB training strategy (CI: 22.34–26.10%). Furthermore, the Mann–Whitney test between the different strategies revealed no significant differences between CZ and the FL-based algorithms (p-value > 0.05) in center-categorized mode. At the same time, a significant difference was observed between the different training approaches on the overall dataset (p-value < 0.05). In addition, voxel-wise comparison with respect to the reference CT-ASC exhibited similar performance for images predicted by CZ (R2 = 0.94), FL-SQ (R2 = 0.93), and FL-PL (R2 = 0.92), while the CB model achieved a far lower coefficient of determination (R2 = 0.74). Despite the strong correlations between CZ and the FL-based methods with respect to the reference CT-ASC, a slight underestimation of predicted voxel values was observed.
Conclusion: Deep learning-based models provide promising results toward quantitative PET image reconstruction. Specifically, we developed two FL models and compared their performance with center-based and centralized models. The proposed FL-based models achieved higher performance than the center-based models, comparable with the centralized models. Our work provides strong empirical evidence that the FL framework can fully benefit from the generalizability and robustness of DL models used for AC/SC in PET, while obviating the need for direct sharing of datasets between clinical imaging centers.
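The parallel FL strategy can be sketched with a minimal FedAvg-style toy example: each "center" fits a model on its own private data, and only the model weights (never the data) are sent to the server and averaged. This uses a linear least-squares model purely for illustration, not the paper's nested U-Net, and all sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground-truth weights shared by all centers' data distributions.
true_w = np.array([2.0, -1.0, 0.5])

def local_data(n):
    """Private dataset for one center: noisy linear observations."""
    X = rng.standard_normal((n, 3))
    y = X @ true_w + 0.1 * rng.standard_normal(n)
    return X, y

centers = [local_data(50) for _ in range(6)]  # 6 centers, 50 samples each

def local_fit(X, y):
    # Least-squares fit on the center's own data only; nothing leaves the center
    # except the fitted weight vector.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# One parallel round: all centers train independently, the server averages weights.
global_w = np.mean([local_fit(X, y) for X, y in centers], axis=0)
```

A sequential (FL-SQ) variant would instead pass the current weights from center to center, each continuing training where the previous one stopped.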

    Capacity-Security Analysis Of Data Hiding Technologies

    In this paper, we consider the problem of joint capacity-security analysis of data hiding technologies from a communications point of view. First, we formulate data hiding as an optimal encoding problem for different operational regimes, including both robust digital watermarking and steganography. This provides the corresponding estimation of the hidden-data statistics, as well as of the rates approaching the embedding capacity. Secondly, we formulate the problem of blind stochastic hidden-data detection based on the developed watermark statistics. Finally, we estimate the error of watermark detection and the variance of the watermark estimation, which determine the security of the system.

    Content-identification: towards capacity based on bounded distance decoder

    In recent years, content identification based on digital fingerprinting has attracted considerable attention in different emerging applications. In this paper, we perform an information-theoretic analysis of finite-length digital fingerprinting systems. We show that the identification capacity over the Binary Symmetric Channel (BSC) is achievable using a Bounded Distance Decoder (BDD).
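A bounded distance decoder over a BSC can be sketched as follows: the probe is an original fingerprint with each bit flipped independently with probability p, and the decoder accepts a database entry only if its Hamming distance falls within a fixed radius, otherwise reporting "not identified". The parameters and the choice of radius here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# L-bit fingerprints, M database entries, BSC crossover probability p.
L, M, p = 128, 1000, 0.1
radius = int(2 * p * L)  # decoding radius comfortably above the expected p*L flips

db = rng.integers(0, 2, size=(M, L), dtype=np.uint8)

# Query: entry 7 passed through the BSC (each bit flipped with probability p).
flips = rng.random(L) < p
probe = db[7] ^ flips.astype(np.uint8)

# Bounded distance decoding: accept only entries within the radius.
dists = (db ^ probe).sum(axis=1)
candidates = np.flatnonzero(dists <= radius)
decision = int(candidates[0]) if len(candidates) == 1 else None
```

Unrelated entries sit at a distance of about L/2 from the probe, so a radius slightly above p*L separates the true entry from the rest with high probability, which is the intuition behind achieving identification capacity with a BDD.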

    Content authentication and identification under informed attacks

    We consider the problem of content identification and authentication based on digital content fingerprinting. Contrary to existing work, in which the performance of these systems under blind attacks is analyzed, we investigate the information-theoretic performance under informed attacks. In the case of binary content fingerprinting, a blind attack produces a probe at random, independently of the fingerprints of the original contents. In contrast, informed attacks assume that the attacker may have some information about the original content and is thus able to produce a counterfeit probe that is related to an authentic fingerprint corresponding to an original item, leading to an increased probability of false acceptance. We demonstrate the impact of the attacker's ability to create counterfeit items whose fingerprints are related to fingerprints of authentic items, and consider the influence of the fingerprint length on the performance of finite-length systems. Finally, the information-theoretic achievable rate of content identification systems sustaining informed attacks is derived under asymptotic assumptions about the fingerprint length.
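The gap between blind and informed attacks can be illustrated with a small Monte Carlo sketch: an authenticator accepts a probe whose Hamming distance to the stored fingerprint is below a threshold, and an informed attacker copies some fraction of the authentic bits. The fingerprint length, threshold, and correlation level are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

L, trials, tau = 64, 2000, 20  # fingerprint length, Monte Carlo trials, accept threshold
auth = rng.integers(0, 2, size=L, dtype=np.uint8)  # one authentic fingerprint

def false_accept_rate(correlation):
    """Probe copies each authentic bit with prob `correlation`, else draws it at random."""
    hits = 0
    for _ in range(trials):
        copy = rng.random(L) < correlation
        probe = np.where(copy, auth, rng.integers(0, 2, size=L))
        if (probe ^ auth).sum() <= tau:  # accepted as authentic
            hits += 1
    return hits / trials

blind_far = false_accept_rate(0.0)     # probe independent of the fingerprint
informed_far = false_accept_rate(0.6)  # attacker knows ~60% of the authentic bits
```

Even partial knowledge of the content pulls the probe's expected distance well below L/2, which is why the false-acceptance probability rises sharply under informed attacks.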